Results 1 - 4 of 4
1.
Inform Med Unlocked ; 32: 101004, 2022.
Article in English | MEDLINE | ID: covidwho-1983243

ABSTRACT

The contagious SARS-CoV-2 virus has had a tremendous impact on the life and health of many communities. It first emerged in late 2019, and so far roughly 539 million cases of COVID-19 have been reported worldwide, a toll reminiscent of the 1918 influenza pandemic. COVID-19 infection can, however, be detected by analysing either X-rays or CT scans, which are presumably among the least expensive diagnostic methods. Building on state-of-the-art convolutional neural networks (CNNs), which integrate image pre-processing techniques with fully connected layers, we developed an AI system contingent on several pre-trained models. Each pre-trained model in our study extracted specific features from chest image datasets drawn from verified sources (Mendeley, Kaggle, and GitHub). First, for the CXR datasets, we trained a CNN from scratch comprising four layers: a Conv2D layer with 32 filters followed by MaxPooling, with this pattern then repeated. We used two techniques to avoid overfitting: early stopping and Dropout. The output layer was a single neuron with a sigmoid activation to classify the two cases (0 or 1), and we used the Adam optimizer because it produced better outcomes than the other optimizers we tried. We report our findings using a confusion matrix, a classification report (recall and precision), sensitivity, and specificity; with this approach, we achieved a classification accuracy of 96%. Our three integrated pre-trained models (VGG16, DenseNet201, and DenseNet121) yielded a remarkable test accuracy of 98.81%. In addition, our merged model (VGG16, DenseNet201) trained on CT images achieved a test accuracy of 99.73% for binary classification in the (Normal/COVID-19) scenario.
Comparing our results with related studies shows that our proposed models were superior to previous CNN machine learning models across various performance metrics. Our pre-trained model associated with the CT dataset achieved an F1-score of 100%, with a loss value of approximately 0.00268.
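The evaluation metrics this abstract reports (confusion matrix, precision, recall/sensitivity, specificity, accuracy) can all be derived from the four cells of a binary confusion matrix. A minimal pure-Python sketch, illustrative only and not the authors' implementation:

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix-based metrics for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "confusion_matrix": [[tn, fp], [fn, tp]],
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,       # sensitivity
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# Toy example: 1 = COVID-19 positive, 0 = normal
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

The same quantities are what a framework's classification report prints; computing them by hand makes the sensitivity/specificity trade-off explicit.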

2.
Knowl Based Syst ; 253: 109539, 2022 Oct 11.
Article in English | MEDLINE | ID: covidwho-1966919

ABSTRACT

Alongside the currently used nasal swab testing, the COVID-19 pandemic situation would gain noticeable advantages from low-cost tests that are available anytime, anywhere, at large scale, and with real-time answers. A novel approach for COVID-19 assessment is adopted here, discriminating negative subjects versus positive or recovered subjects. The scope is to identify potential discriminating features, highlight short- and mid-term effects of COVID-19 on the voice, and compare two custom algorithms. A pool of 310 subjects took part in the study; recordings were collected in a low-noise, controlled setting employing three different vocal tasks. Binary classifications followed, using two custom algorithms. The first was based on the coupling of boosting and bagging: an AdaBoost classifier using Random Forest learners. A feature selection process was employed during training, identifying a subset of features that act as clinically relevant biomarkers. The other approach was centred on two custom CNN architectures applied to mel-spectrograms, with custom knowledge-based data augmentation. Performances, evaluated on an independent test set, were comparable: AdaBoost and the CNN differentiated COVID-19-positive from negative subjects with accuracies of 100% and 95%, respectively, and recovered from negative individuals with accuracies of 86.1% and 75%, respectively. This study highlights the possibility of identifying COVID-19-positive subjects, foreseeing a tool for on-site screening, while also considering recovered subjects and the effects of COVID-19 on the voice. The two proposed novel architectures allow for the identification of biomarkers and demonstrate the ongoing relevance of traditional ML versus deep learning in speech analysis.
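The "coupling of boosting and bagging" described above (AdaBoost over Random Forest learners) can be sketched in scikit-learn. The synthetic data below is a hypothetical stand-in for the study's acoustic feature vectors, and the hyperparameters are illustrative, not the authors' settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for extracted voice features (not the study's data)
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Boosting (AdaBoost) wrapped around bagged learners (small Random Forests);
# the base estimator is passed as the first argument
clf = AdaBoostClassifier(
    RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0),
    n_estimators=20,
    random_state=0,
)
clf.fit(X_tr, y_tr)
score = clf.score(X_te, y_te)
```

Feature selection (as in the study) would be applied to `X` before fitting, e.g. keeping only the columns that rank highest as candidate biomarkers.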

3.
Results Phys ; 27: 104495, 2021 Aug.
Article in English | MEDLINE | ID: covidwho-1525938

ABSTRACT

The first known case of coronavirus disease 2019 (COVID-19) was identified in December 2019. It has since spread worldwide, leading to an ongoing pandemic and imposing restrictions and costs on many countries. Predicting the number of new cases and deaths during this period can be a useful step in predicting the costs and facilities required in the future. The purpose of this study is to predict new cases and death rates one, three, and seven days ahead over the next 100 days. The motivation for predicting every n days (instead of every day) is to investigate the possibility of reducing computational cost while still achieving reasonable performance; such a scenario may be encountered in real-time forecasting of time series. Six different deep learning methods are examined on data adopted from the WHO website. Three methods are LSTM, convolutional LSTM, and GRU; the bidirectional extension of each is then considered to forecast the rate of new cases and new deaths in Australia and Iran. This study is novel in that it carries out a comprehensive evaluation of the three aforementioned deep learning methods and their bidirectional extensions for prediction on COVID-19 new-case and new-death time series. To the best of our knowledge, this is the first time that Bi-GRU and Bi-Conv-LSTM models have been used for prediction on COVID-19 new-case and new-death time series. The evaluation of the methods is presented in the form of graphs and the Friedman statistical test. The results show that the bidirectional models have lower errors than the other models. Several error evaluation metrics are presented to compare all models, and finally, the superiority of the bidirectional methods is determined. This research could be useful for organisations working against COVID-19 when determining their long-term plans.
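The n-day-ahead framing described above amounts to sliding-window supervision: each window of past observations is paired with the value n days after the window ends. A minimal pure-Python sketch of that data preparation (the study itself feeds such windows to LSTM/GRU models; the toy series here is invented):

```python
def make_windows(series, window, horizon):
    """Pair each sliding window of past values with the value
    `horizon` steps after the window ends (the n-day-ahead target)."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return X, y

# Toy daily new-case counts; horizon=3 gives a three-day-ahead target
cases = [10, 12, 15, 20, 26, 33, 41, 50]
X, y = make_windows(cases, window=3, horizon=3)
```

With `horizon=1` this reduces to ordinary next-day forecasting; larger horizons trade temporal resolution for fewer, cheaper predictions, which is the computational-cost question the study investigates.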

4.
Biomed Signal Process Control ; 68: 102583, 2021 Jul.
Article in English | MEDLINE | ID: covidwho-1163451

ABSTRACT

Due to an unforeseen turn of events, our world has undergone another global pandemic, caused by the highly contagious novel coronavirus behind COVID-19. The virus inflames the lungs similarly to pneumonia, making it challenging to diagnose. Currently, the standard way to diagnose the virus's presence in an individual is a molecular real-time reverse-transcription polymerase chain reaction (rRT-PCR) test on fluids acquired through nasal swabs. Such a test is difficult to obtain in most underdeveloped countries, which have few experts who can perform it. As a substitute, the widely available chest X-ray (CXR) became an alternative means to rule out the virus. However, this method does not come easy either, as the virus still possesses unknown characteristics that even experienced radiologists and other medical experts find difficult to diagnose from CXRs. Several recent studies have used computer-aided methods to automate and improve such diagnosis of CXRs through Artificial Intelligence (AI) based on computer vision and deep convolutional neural networks (DCNNs), some of which require heavy processing costs and tedious methods to produce. Therefore, this work proposed Fused-DenseNet-Tiny, a lightweight DCNN model based on truncated and concatenated densely connected networks (DenseNet). The model was trained to learn CXR features using transfer learning, partial layer freezing, and feature fusion. Upon evaluation, the proposed model achieved a remarkable 97.99% accuracy with only 1.2 million parameters and a shorter end-to-end structure. It also outperformed some existing studies and other massive state-of-the-art models that diagnose COVID-19 from CXRs.
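The feature fusion described above amounts to running an input through two (frozen) backbone branches and concatenating their feature vectors before a small trainable head. A minimal numpy sketch with hypothetical dimensions, standing in frozen random linear maps for the truncated DenseNet branches of the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "frozen" feature extractors, stood in for here by fixed random
# linear maps (the real model uses truncated DenseNet branches)
W_a = rng.standard_normal((1024, 64))   # branch A: 1024-dim input -> 64 features
W_b = rng.standard_normal((1024, 96))   # branch B: 1024-dim input -> 96 features

def fused_features(x):
    """Run both frozen branches and concatenate (feature fusion)."""
    return np.concatenate([x @ W_a, x @ W_b], axis=-1)

x = rng.standard_normal((8, 1024))      # batch of 8 flattened inputs
fused = fused_features(x)               # shape (8, 64 + 96) = (8, 160)

# Only this head would be trainable during fine-tuning; the frozen
# branches correspond to the "partial layer freezing" in the abstract
W_head = rng.standard_normal((160, 1))
logits = fused @ W_head                 # one binary (COVID-19/normal) logit each
```

Freezing the branches keeps the parameter count and training cost low, which is the motivation for the 1.2-million-parameter design the abstract reports.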
